1.
Sci Data ; 11(1): 376, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38609400

ABSTRACT

The growing availability of smart meter data has facilitated the development of energy-saving services like demand response, personalized energy feedback, and non-intrusive load monitoring applications, all of which heavily rely on advanced machine learning algorithms trained on energy consumption datasets. To ensure the accuracy and reliability of these services, real-world smart meter data collection is crucial. The Plegma dataset described in this paper addresses this need by providing whole-house aggregate loads and appliance-level consumption measurements at 10-second intervals from 13 different households over a period of one year. It also includes environmental data such as humidity and temperature, building characteristics, demographic information, and user practice routines to enable quantitative as well as qualitative analysis. Plegma is the first high-frequency electricity measurements dataset in Greece, capturing the consumption behavior of people in the Mediterranean area who use devices not commonly included in other datasets, such as AC units and electric water boilers. The dataset comprises 218 million readings from 88 installed meters and sensors. The collected data are available in CSV format.
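As a rough illustration of how 10-second aggregate and appliance-level CSV readings of this kind could be loaded and aligned, the pandas sketch below joins and resamples two hypothetical files; the file names and column layout are illustrative assumptions, not the actual Plegma schema.

```python
import pandas as pd

# Hypothetical file names and column labels -- the real Plegma schema may differ.
aggregate = pd.read_csv("house_01_aggregate.csv", parse_dates=["timestamp"], index_col="timestamp")
boiler = pd.read_csv("house_01_water_boiler.csv", parse_dates=["timestamp"], index_col="timestamp")

# Align the two 10-second series on a common index and fill short gaps.
df = aggregate.join(boiler, how="inner", lsuffix="_agg", rsuffix="_boiler")
df = df.resample("10s").mean().interpolate(limit=6)  # tolerate up to one minute of missing data

# Downsample to 1-minute means, e.g. for exploratory plots or NILM baselines.
minute_means = df.resample("1min").mean()
print(minute_means.head())
```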

2.
Comput Biol Med ; 170: 108036, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38295478

ABSTRACT

Over the past five years, interest in the literature regarding the security of the Internet of Medical Things (IoMT) has increased. Due to the enhanced interconnectedness of IoMT devices, their susceptibility to cyber-attacks has escalated proportionally. Motivated by the promising potential of AI-related technologies to improve certain cybersecurity measures, we present a comprehensive review of this emerging field. In this review, we attempt to bridge the corresponding literature gap regarding modern cybersecurity technologies that deploy AI techniques to improve their performance and compensate for security and privacy vulnerabilities. In this direction, we have systematically gathered and classified the extensive research on this topic. Our findings highlight that the integration of machine learning (ML) and deep learning (DL) techniques improves the performance of cybersecurity measures as well as their speed, reliability, and effectiveness. This may prove useful for improving the security and privacy of IoMT devices. Furthermore, by considering the numerous advantages of AI technologies over their core cybersecurity counterparts, including blockchain, anomaly detection, homomorphic encryption, differential privacy, federated learning, and so on, we provide a structured overview of the current scientific trends. We conclude with considerations for future research, emphasizing the promising potential of AI-driven cybersecurity in the IoMT landscape, especially in patient data protection and in data-driven healthcare.


Subjects
Artificial Intelligence; Internet; Humans; Reproducibility of Results; Machine Learning; Computer Security
3.
IEEE Trans Neural Netw Learn Syst ; 34(7): 3299-3307, 2023 Jul.
Article in English | MEDLINE | ID: mdl-35108212

ABSTRACT

Understanding the dynamics of deforestation and the land uses of neighboring areas is of vital importance for the design and development of appropriate forest conservation and management policies. In this article, we approach deforestation as a multilabel classification (MLC) problem in an endeavor to capture the various relevant land uses from satellite images. To this end, we propose a multilabel vision transformer model, ForestViT, which leverages the benefits of the self-attention mechanism, obviating any convolution operations involved in commonly used deep learning models utilized for deforestation detection. Experimental evaluation on open satellite imagery datasets yields promising results in the case of MLC, particularly for imbalanced classes, and indicates ForestViT's superiority compared with well-established convolutional structures (ResNet, VGG, DenseNet, and MobileNet neural networks). This superiority is more evident for minority classes.


Subjects
Neural Networks, Computer; Satellite Imagery; Conservation of Natural Resources/methods
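To make the multilabel setup concrete, here is a minimal PyTorch sketch of a multilabel classification head with a sigmoid-based objective on top of a stand-in encoder; the layer sizes, label count, and toy backbone are assumptions, and this is not the authors' ForestViT implementation.

```python
import torch
import torch.nn as nn

class MultiLabelViT(nn.Module):
    """Toy multilabel classifier: an encoder backbone followed by a per-label logit head."""
    def __init__(self, backbone: nn.Module, embed_dim: int, num_labels: int):
        super().__init__()
        self.backbone = backbone              # any module mapping images to (B, embed_dim)
        self.head = nn.Linear(embed_dim, num_labels)

    def forward(self, x):
        features = self.backbone(x)           # (B, embed_dim)
        return self.head(features)            # raw logits, one per land-use label

# One label vector may have several active classes (e.g. "agriculture" and "road").
model = MultiLabelViT(backbone=nn.Sequential(nn.Flatten(), nn.LazyLinear(256)),
                      embed_dim=256, num_labels=17)
logits = model(torch.randn(4, 3, 64, 64))
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (4, 17)).float())  # multilabel objective
```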
4.
Front Physiol ; 13: 924546, 2022.
Article in English | MEDLINE | ID: mdl-36338484

ABSTRACT

Diabetic foot complications have multiple adverse effects on a person's quality of life. Yet, efficient monitoring schemes can mitigate or postpone such disorders, mainly by detecting regions of interest early. Nowadays, optical sensors and artificial intelligence (AI) tools can contribute efficiently to such monitoring processes. In this work, we provide information on the adopted imaging schemes and related optical sensors in this field. The analysis considers both the physiology of the patients and the characteristics of the sensors. Currently, there are multiple approaches covering both the visible and infrared bands (over multiple ranges), most of them coupled with various AI tools. The source of the data (sensor type) can support different monitoring strategies and imposes restrictions on the AI tools that can be used with it. This paper provides a comprehensive literature review of AI-assisted diabetic foot ulcer (DFU) monitoring methods, presenting the outcomes of a large number of recently published scholarly articles. Furthermore, the paper discusses the highlights of these methods and the challenges of transferring them into a practical and trustworthy framework for sufficient remote management of the patients.

5.
Diagnostics (Basel) ; 12(10)2022 Oct 01.
Article in English | MEDLINE | ID: mdl-36292078

ABSTRACT

In this study, we propose a tensor-based learning model to efficiently detect abnormalities on digital mammograms. Because the availability of medical data is limited and often restricted by GDPR (General Data Protection Regulation) compliance, the need for more sophisticated and less data-hungry approaches is urgent. Accordingly, our proposed artificial intelligence framework utilizes the canonical polyadic decomposition to decrease the trainable parameters of the wrapped Rank-R FNN model, leading to efficient learning using small amounts of data. Our model was evaluated on the open-source digital mammographic database INBreast and compared with state-of-the-art models in this domain. The experimental results show that the proposed solution performs well in comparison with other deep learning models, such as AlexNet and SqueezeNet, achieving 90% ± 4% accuracy and an F1 score of 84% ± 5%. Additionally, our framework tends to attain more robust performance with small amounts of data and is computationally lighter for inference purposes, due to the small number of trainable parameters.
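The parameter saving behind the canonical polyadic (CP) idea can be illustrated with a small NumPy sketch: a rank-R factorisation replaces a dense 3-mode weight tensor with one small factor matrix per mode. The dimensions and rank below are arbitrary placeholders, not the paper's Rank-R FNN configuration.

```python
import numpy as np

# A dense weight tensor for a 3-mode input (e.g. an H x W x C patch) mapped to one unit.
H, W, C = 32, 32, 3
dense_params = H * W * C                      # 3072 weights for a single output neuron

# Rank-R CP decomposition: the tensor is a sum of R rank-1 terms, one factor per mode.
R = 5
a, b, c = np.random.randn(R, H), np.random.randn(R, W), np.random.randn(R, C)
cp_params = R * (H + W + C)                   # 335 weights -- roughly a 9x reduction

def cp_response(x, a, b, c):
    """Inner product of input tensor x (H, W, C) with the CP-factorised weight tensor."""
    # Contract x with each mode's factors, then sum over the R rank-1 components.
    return sum(np.einsum("hwc,h,w,c->", x, a[r], b[r], c[r]) for r in range(len(a)))

print(dense_params, cp_params, cp_response(np.random.randn(H, W, C), a, b, c))
```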

6.
Sensors (Basel) ; 22(15)2022 Aug 05.
Article in English | MEDLINE | ID: mdl-35957428

ABSTRACT

Non-intrusive load monitoring (NILM) is the task of disaggregating the total power consumption into its individual sub-components. Over the years, signal processing and machine learning algorithms have been combined to achieve this. Many publications and extensive research efforts have been devoted to energy disaggregation, or NILM, so that state-of-the-art methods can reach the desired performance. The initial interest of the scientific community in formulating and mathematically describing the NILM problem using machine learning tools has now shifted toward a more practical NILM. Currently, we are in the mature NILM period, in which there is an attempt to apply NILM in real-life application scenarios. Thus, the complexity of the algorithms, transferability, reliability, practicality, and, in general, trustworthiness are the main issues of interest. This review narrows the gap between the early, immature NILM era and the mature one. In particular, the paper provides a comprehensive literature review of NILM methods for residential appliances only. The paper analyzes, summarizes, and presents the outcomes of a large number of recently published scholarly articles. Furthermore, the paper discusses the highlights of these methods and introduces the research dilemmas that should be taken into consideration by researchers applying NILM methods. Finally, we show the need for transferring the traditional disaggregation models into a practical and trustworthy framework.


Subjects
Algorithms; Machine Learning; Physical Phenomena; Reproducibility of Results; Signal Processing, Computer-Assisted
7.
Stud Health Technol Inform ; 295: 566-569, 2022 Jun 29.
Article in English | MEDLINE | ID: mdl-35773937

ABSTRACT

European and international cities face crucial global geopolitical, economic, environmental, and other changes. All these intensify threats to, and inequalities in, citizens' health. Blue-Green Solutions have been broadly implemented in urban and rural areas to tackle the above challenges. The contribution of mobile health (mHealth) technologies to people's well-being has been found to be significant. In addition, several mHealth applications have been used to support patients with mental health or cardiovascular diseases, with very promising results. Remote patient monitoring can be a valuable asset in chronic disease management for patients suffering from diabetes, hypertension or arrhythmia, depression, asthma, allergies, and other conditions. The scope of this paper is to present the specifications, design, and development of a mobile application that collects health-related and location data of users visiting areas with Blue-Green Solutions. The mobile application has been developed to record citizens' and patients' physical activity and vital signs using wearable devices. The proposed application can also monitor patients' physical, physiological, and emotional status, as well as motivate them to engage in social and self-caring activities. Additional features include the analysis of the patients' behavior to improve self-management. The "HEART by BioAsssist" application could be used as a health and other data collection tool, as well as an "intelligent assistant" that monitors and promotes patients' physical activity.


Subjects
Mobile Applications; Self-Management; Telemedicine; Biomedical Technology; Humans; Public Health; Self-Management/methods; Telemedicine/methods
8.
Stud Health Technol Inform ; 294: 939-940, 2022 May 25.
Article in English | MEDLINE | ID: mdl-35612248

ABSTRACT

The urban environment appears to affect citizens' health. The implementation of Blue-Green Solutions (BGS) in urban areas has been used to promote public health and citizens' well-being. The aim of this paper is to present the development of an mHealth app for monitoring the health status of patients and citizens in areas where BGS will be applied. The "HEART by BioAsssist" application could be used as a health and other data collection tool, as well as an "intelligent assistant" that monitors and promotes patients' physical activity in areas with Blue-Green Solutions.


Subjects
Public Health; Telemedicine; Humans
9.
Sensors (Basel) ; 22(10)2022 May 11.
Article in English | MEDLINE | ID: mdl-35632066

ABSTRACT

The evolution of COVID-19 poses significant challenges for the European healthcare system. The heterogeneous spread of the pandemic within EU regions elicited a wide range of policies, such as school closures, transport restrictions, etc. However, the implementation of these interventions has not been accompanied by quantitative methods that would indicate their effectiveness. As a result, the efficacy of such policies in reducing the spread of the virus varies significantly. This paper investigates the effectiveness of using deep learning paradigms to accurately model the spread of COVID-19. The deep learning approaches proposed in this paper are able to effectively map the temporal evolution of a COVID-19 outbreak, while simultaneously taking policy interventions directly into account in the modelling process. Thus, our approach facilitates data-driven decision making by utilizing previous knowledge to train models that predict not only the spread of COVID-19, but also the effect of specific policy measures on minimizing this spread. Global models at the EU level are proposed, which can be successfully applied at the national level. These models use various inputs in order to successfully model the spatio-temporal variability of the phenomenon and obtain generalization abilities. The proposed models are compared against traditional epidemiological and Autoregressive Integrated Moving Average (ARIMA) models.


Subjects
COVID-19; Deep Learning; COVID-19/epidemiology; Delivery of Health Care; Disease Outbreaks; Humans; Pandemics
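A minimal sketch of the kind of model the abstract describes, where a sequence network ingests past case counts together with binary policy indicators, might look as follows in PyTorch; the architecture, window length, and policy features are assumptions, not the published models.

```python
import torch
import torch.nn as nn

class CovidPolicyLSTM(nn.Module):
    """Predict next-day cases from a window of past cases plus policy indicators."""
    def __init__(self, n_policies: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1 + n_policies, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, cases, policies):
        # cases: (B, T, 1) scaled case counts; policies: (B, T, n_policies) 0/1 flags
        x = torch.cat([cases, policies], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])              # one-step-ahead forecast

model = CovidPolicyLSTM(n_policies=3)          # e.g. school closure, transport, gatherings
pred = model(torch.rand(8, 14, 1), torch.randint(0, 2, (8, 14, 3)).float())
```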
10.
Sensors (Basel) ; 22(8)2022 Apr 11.
Article in English | MEDLINE | ID: mdl-35458907

ABSTRACT

Non-Intrusive Load Monitoring (NILM) describes the process of inferring the consumption pattern of appliances by only having access to the aggregated household signal. Sequence-to-sequence deep learning models have been firmly established as state-of-the-art approaches for NILM, in an attempt to identify the pattern of the appliance power consumption signal within the aggregated power signal. Going beyond the limitations of the recurrent models that have been widely used in sequential modeling, this paper proposes a transformer-based architecture for NILM. Our approach, called ELECTRIcity, utilizes transformer layers to accurately estimate the power signal of domestic appliances by relying entirely on attention mechanisms to extract global dependencies between the aggregate and the domestic appliance signals. Another added value of the proposed model is that ELECTRIcity works with minimal dataset pre-processing and without requiring data balancing. Furthermore, ELECTRIcity introduces an efficient training routine compared to other traditional transformer-based architectures. In this routine, ELECTRIcity splits model training into unsupervised pre-training and downstream task fine-tuning, which yields gains in both predictive accuracy and training time. Experimental results indicate ELECTRIcity's superiority compared to several state-of-the-art methods.


Subjects
Electric Power Supplies; Electricity
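The two-stage routine outlined above (unsupervised pre-training followed by supervised fine-tuning around a transformer encoder) could be sketched roughly as below; this is a simplified stand-in with illustrative layer sizes, not the released ELECTRIcity code.

```python
import torch
import torch.nn as nn

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2)
embed, head = nn.Linear(1, 64), nn.Linear(64, 1)
params = list(encoder.parameters()) + list(embed.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

def pretrain_step(aggregate):                  # stage 1: reconstruct masked aggregate windows
    mask = torch.rand_like(aggregate) < 0.25
    corrupted = aggregate.masked_fill(mask, 0.0)
    recon = head(encoder(embed(corrupted)))
    return nn.functional.mse_loss(recon[mask], aggregate[mask])

def finetune_step(aggregate, appliance):       # stage 2: map aggregate to appliance signal
    pred = head(encoder(embed(aggregate)))
    return nn.functional.mse_loss(pred, appliance)

agg, app = torch.rand(16, 128, 1), torch.rand(16, 128, 1)
for step_fn, args in [(pretrain_step, (agg,)), (finetune_step, (agg, app))]:
    opt.zero_grad()
    loss = step_fn(*args)
    loss.backward()
    opt.step()
```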
11.
Biomed Phys Eng Express ; 8(2)2022 02 18.
Article in English | MEDLINE | ID: mdl-35144242

ABSTRACT

Over the past few years, positron emission tomography/computed tomography (PET/CT) imaging for computer-aided diagnosis has received increasing attention. Supervised deep learning architectures are usually employed for the detection of abnormalities, with anatomical localization, especially in the case of CT scans. However, the main limitations of the supervised learning paradigm include (i) the large amounts of data required for model training, and (ii) the assumption of fixed network weights upon training completion, implying that the performance of the model cannot be further improved after training. In order to overcome these limitations, we apply a few-shot learning (FSL) scheme. Contrary to traditional deep learning practices, in FSL the model is provided with less data during training. The model then utilizes end-user feedback after training to constantly improve its performance. We integrate FSL in a U-Net architecture for lung cancer lesion segmentation on PET/CT scans, allowing for dynamic model weight fine-tuning and resulting in an online supervised learning scheme. Constant online readjustment of the model weights according to the users' feedback increases the detection and classification accuracy, especially in cases where low detection performance is encountered. Our proposed method is validated on the Lung-PET-CT-DX TCIA database. PET/CT scans from 87 patients were included in the dataset and were acquired 60 minutes after intravenous 18F-FDG injection. Experimental results indicate the superiority of our approach compared to other state-of-the-art methods.


Subjects
Deep Learning; Lung Neoplasms; Fluorodeoxyglucose F18; Humans; Lung Neoplasms/diagnostic imaging; Positron Emission Tomography Computed Tomography; Tomography, X-Ray Computed
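The online weight readjustment described above boils down to a short fine-tuning loop triggered by each user-corrected mask. A minimal sketch, assuming a generic pretrained segmentation network and a hypothetical helper name, is shown below; it is not the paper's implementation.

```python
import torch
import torch.nn as nn

def online_update(model: nn.Module, scan: torch.Tensor, corrected_mask: torch.Tensor,
                  steps: int = 3, lr: float = 1e-4):
    """Fine-tune a pretrained segmentation model on a single expert-corrected example."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(steps):                       # a few small steps to avoid forgetting
        opt.zero_grad()
        loss = loss_fn(model(scan), corrected_mask)
        loss.backward()
        opt.step()
    return float(loss)

# Stand-in "U-Net": any module mapping (B, 1, H, W) -> (B, 1, H, W) logits would fit here.
toy_net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 3, padding=1))
print(online_update(toy_net, torch.rand(1, 1, 64, 64), torch.randint(0, 2, (1, 1, 64, 64)).float()))
```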
12.
Sensors (Basel) ; 21(6)2021 Mar 22.
Article in English | MEDLINE | ID: mdl-33810066

ABSTRACT

Recent studies indicate that detecting radiographic patterns on chest CT scans can yield high sensitivity and specificity for COVID-19 identification. In this paper, we scrutinize the effectiveness of deep learning models for semantic segmentation of pneumonia-infected areas in CT images for the detection of COVID-19. Traditional methods for CT scan segmentation exploit a supervised learning paradigm, so they (a) require large volumes of data for their training, and (b) assume fixed (static) network weights once the training procedure has been completed. Recently, to overcome these difficulties, few-shot learning (FSL) has been introduced as a general concept of network model training using a very small number of samples. In this paper, we explore the efficacy of few-shot learning in U-Net architectures, allowing for dynamic fine-tuning of the network weights as a few new samples are fed into the U-Net. Experimental results indicate improvement in the segmentation accuracy of identifying COVID-19 infected regions. In particular, using 4-fold cross-validation results of the different classifiers, we observed an improvement of 5.388 ± 3.046% for all test data regarding the IoU metric and a similar increment of 5.394 ± 3.015% for the F1 score. Moreover, the statistical significance of the improvement obtained using our proposed few-shot U-Net architecture compared with the traditional U-Net model was confirmed by applying the Kruskal-Wallis test (p-value = 0.026).


Subjects
COVID-19/diagnostic imaging; Deep Learning; Tomography, X-Ray Computed; Humans
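The reported significance check can be outlined with SciPy: per-fold scores of two models are compared with the Kruskal-Wallis H-test. The score arrays below are placeholders, not the paper's data.

```python
from scipy import stats
import numpy as np

# Placeholder per-fold IoU scores for a baseline U-Net and a few-shot variant.
iou_baseline = np.array([0.61, 0.64, 0.60, 0.63])
iou_few_shot = np.array([0.66, 0.70, 0.65, 0.68])

# Kruskal-Wallis H-test: a non-parametric check that the score distributions differ.
h_stat, p_value = stats.kruskal(iou_baseline, iou_few_shot)
print(f"H = {h_stat:.3f}, p = {p_value:.3f}")   # p < 0.05 would indicate a significant gap
```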
13.
IEEE Comput Graph Appl ; 40(4): 26-38, 2020.
Article in English | MEDLINE | ID: mdl-32340939

ABSTRACT

Serious games are receiving increasing attention in the field of cultural heritage (CH) applications. A special field of CH and education is intangible cultural heritage, particularly dance. Machine learning (ML) tools are necessary elements for the success of a serious game platform, since they introduce intelligence into the processing and analysis of users' interactivity. ML provides intelligent scoring and monitoring capabilities for the user's progress in a serious game platform. In this article, we introduce a deep learning model for motion primitive classification. The model combines a convolutional processing layer with a bidirectional analysis module. This way, RGB information is efficiently handled by the hierarchies of convolutions, while the bidirectional properties of a long short-term memory (LSTM) model are retained. The resulting convolutionally enhanced bidirectional LSTM (CEBi-LSTM) architecture is less sensitive to skeleton errors, which occur when using low-cost sensors such as Kinect, while simultaneously handling the high amount of detail available when using RGB visual information.
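A rough sketch of the general shape described above (a convolutional front-end feeding a bidirectional LSTM that classifies a motion primitive) is given below in PyTorch; the layer sizes and pooling choices are assumptions, not the published CEBi-LSTM.

```python
import torch
import torch.nn as nn

class ConvBiLSTMClassifier(nn.Module):
    """Per-frame convolutional features pooled and fed to a bidirectional LSTM."""
    def __init__(self, n_classes: int, hidden: int = 128):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten())   # (B*T, 16)
        self.bilstm = nn.LSTM(16, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, clips):                    # clips: (B, T, 3, H, W) RGB frames
        b, t = clips.shape[:2]
        feats = self.conv(clips.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.bilstm(feats)
        return self.fc(seq[:, -1])               # classify the motion primitive

model = ConvBiLSTMClassifier(n_classes=10)
logits = model(torch.rand(2, 16, 3, 64, 64))
```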

14.
Comput Intell Neurosci ; 2019: 2859429, 2019.
Article in English | MEDLINE | ID: mdl-30800156

ABSTRACT

Accurate prediction of the seawater intrusion extent is necessary for many applications, such as groundwater management or protection of coastal aquifers from water quality deterioration. However, most applications require a large number of simulations, usually at the expense of prediction accuracy. In this study, the Gaussian process regression method is investigated as a potential surrogate model for the computationally expensive variable density model. Gaussian process regression is a nonparametric kernel-based probabilistic model able to handle complex relations between input and output. In this study, the extent of seawater intrusion is represented by the location of the 0.5 kg/m³ iso-chlore at the bottom of the aquifer (the seawater intrusion toe). The initial position of the toe, expressed as the distance of this line from a number of observation points across the coastline, along with the pumping rates, constitutes the surrogate model inputs, whereas the final position of the toe constitutes the output variable set. The training sample of the surrogate model consists of 4000 variable density simulations, which differ not only in the pumping rate pattern but also in the initial concentration distribution. The Latin hypercube sampling method is used to obtain the pumping rate patterns. For comparison purposes, a number of widely used regression methods are employed, specifically regression trees and Support Vector Machine regression (linear and nonlinear). A Bayesian optimization method is applied to all the regressors to maximize their efficiency in the prediction of seawater intrusion. The final results indicate that the Gaussian process regression method, albeit more time consuming, proved to be more efficient in terms of the mean absolute error (MAE), the root mean square error (RMSE), and the coefficient of determination (R²).


Subjects
Bayes Theorem; Computer Simulation; Normal Distribution; Seawater; Environmental Monitoring; Fresh Water; Groundwater; Regression Analysis
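The surrogate-modelling workflow described above can be outlined with scikit-learn and SciPy: Latin hypercube sampling generates pumping-rate patterns, a placeholder function stands in for the expensive variable-density simulation, and a Gaussian process regressor is fitted to its outputs. All ranges and the toy simulator are assumptions, not the study's setup.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Latin hypercube sample of pumping-rate patterns (placeholder ranges, 5 wells).
sampler = qmc.LatinHypercube(d=5, seed=0)
pumping = qmc.scale(sampler.random(n=200), l_bounds=[0] * 5, u_bounds=[100] * 5)

# Placeholder for the expensive variable-density simulation returning the toe position.
def simulate_toe_position(rates):
    return 500 - 2.0 * rates.sum(axis=1) + 5 * np.random.randn(len(rates))

toe = simulate_toe_position(pumping)

# Fit the surrogate and predict with uncertainty for a new pumping pattern.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=50.0), normalize_y=True).fit(pumping, toe)
mean, std = gpr.predict([[20, 40, 60, 10, 30]], return_std=True)
print(mean, std)
```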
15.
Sensors (Basel) ; 19(1)2018 Dec 21.
Article in English | MEDLINE | ID: mdl-30583457

ABSTRACT

In this paper, we present WaterSpy, a project developing an innovative, compact, cost-effective photonic device for pervasive water quality sensing, operating in the mid-IR spectral range. The approach combines the use of advanced Quantum Cascade Lasers (QCLs) employing the Vernier effect, used as light source, with novel, fibre-coupled, fast and sensitive Higher Operation Temperature (HOT) photodetectors, used as sensors. These will be complemented by optimised laser driving and detector electronics, laser modulation and signal conditioning technologies. The paper presents the WaterSpy concept, the requirements elicited, the preliminary architecture design of the device, the use cases in which it will be validated, while highlighting the innovative technologies that contribute to the advancement of the current state of the art.

17.
Comput Intell Neurosci ; 2018: 7068349, 2018.
Article in English | MEDLINE | ID: mdl-29487619

ABSTRACT

Over the last few years, deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, namely Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein.


Subjects
Algorithms; Machine Learning; Neural Networks, Computer; Humans
18.
Comput Intell Neurosci ; 2017: 5891417, 2017.
Article in English | MEDLINE | ID: mdl-29312449

ABSTRACT

Detection of outliers in radar signals is a considerable challenge in maritime surveillance applications. High-Frequency Surface-Wave (HFSW) radars have attracted significant interest as potential tools for long-range target identification and outlier detection at over-the-horizon (OTH) distances. However, a number of disadvantages, such as their low spatial resolution and presence of clutter, have a negative impact on their accuracy. In this paper, we explore the applicability of deep learning techniques for detecting deviations from the norm in behavioral patterns of vessels (outliers) as they are tracked from an OTH radar. The proposed methodology exploits the nonlinear mapping capabilities of deep stacked autoencoders in combination with density-based clustering. A comparative experimental evaluation of the approach shows promising results in terms of the proposed methodology's performance.


Subjects
Algorithms; Machine Learning; Radar; Signal Processing, Computer-Assisted; Cluster Analysis; Humans; Nonlinear Dynamics; Signal-to-Noise Ratio; Wind
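A minimal sketch of the pipeline shape described above, with an autoencoder compressing track features and DBSCAN flagging low-density latent points as outliers, is given below; the feature dimensionality, layer sizes, and clustering parameters are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from sklearn.cluster import DBSCAN

# Toy stacked autoencoder for 20-dimensional vessel-track feature vectors.
encoder = nn.Sequential(nn.Linear(20, 10), nn.ReLU(), nn.Linear(10, 3))
decoder = nn.Sequential(nn.Linear(3, 10), nn.ReLU(), nn.Linear(10, 20))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

tracks = torch.rand(500, 20)                     # placeholder behavioural features
for _ in range(200):                             # reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(tracks)), tracks)
    loss.backward()
    opt.step()

# Density-based clustering in the latent space; DBSCAN labels noise points as -1.
latent = encoder(tracks).detach().numpy()
labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(latent)
outliers = (labels == -1).nonzero()[0]
print(f"{len(outliers)} candidate outlier tracks")
```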
19.
IEEE Trans Cybern ; 46(12): 2810-2824, 2016 Dec.
Article in English | MEDLINE | ID: mdl-26552100

ABSTRACT

The millions of tweets submitted daily overwhelm users, who find it difficult to identify content of interest, revealing the need for event detection algorithms for Twitter. Such algorithms are proposed in this paper, covering both short-term periods (identifying what is currently happening) and long-term periods (reviewing the most salient recently submitted events). For both scenarios, we propose fuzzy-represented, temporally evolving, tweet-based information-theoretic metrics to model Twitter dynamics. The Riemannian distance is also exploited with respect to word signatures to minimize temporal effects due to submission delays. Events are detected through a multiassignment graph partitioning algorithm that 1) optimally retains maximum coherence within a cluster while 2) allowing a word to belong to several clusters (events). Experimental results on real-life data demonstrate that our approach outperforms other methods.
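The multiassignment partitioning itself is involved, but its key property (a word may belong to several events) can be imitated with a simple soft-clustering stand-in: a mixture model whose posterior probabilities attach each word to every event exceeding a threshold. The word signatures below are made up, and this is only a loose analogue of the paper's graph-based method.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder temporal signatures for 100 words (e.g. binned tweet frequencies).
rng = np.random.default_rng(0)
word_signatures = rng.random((100, 12))

# Soft clustering: each word gets a membership probability for every candidate event.
gmm = GaussianMixture(n_components=4, random_state=0).fit(word_signatures)
memberships = gmm.predict_proba(word_signatures)

# A word is attached to every event whose membership exceeds a threshold,
# mirroring the idea that one word may participate in several events.
assignments = [np.where(row > 0.2)[0].tolist() for row in memberships]
print(assignments[:5])
```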

20.
IEEE Trans Syst Man Cybern B Cybern ; 34(2): 1235-47, 2004 Apr.
Article in English | MEDLINE | ID: mdl-15376867

ABSTRACT

Implementing a commercial application on a grid infrastructure introduces new challenges in managing quality-of-service (QoS) requirements, most of which stem from the fact that the QoS negotiated between the user and the service provider must be strictly satisfied. An interesting commercial application with a wide impact on a variety of fields, which can benefit from computational grid technologies, is three-dimensional (3-D) rendering. In order to implement 3-D rendering on a grid infrastructure, however, we should develop appropriate scheduling and resource allocation mechanisms so that the negotiated QoS requirements are met. Efficient scheduling schemes require modeling and prediction of the rendering workload. In this paper, workload prediction is addressed based on a combined fuzzy classification and neural network model. Initially, appropriate descriptors are extracted to represent the synthetic world. The descriptors are obtained by parsing RIB-formatted files, a format that provides a general structure for describing computer-generated images. Fuzzy classification is used to organize the rendering descriptors so that a reliable representation is accomplished, which increases the prediction accuracy. The neural network performs workload prediction by modeling the nonlinear input-output relationship between rendering descriptors and the respective computational complexity. To increase prediction accuracy, a constructive algorithm is adopted in this paper to train the neural network so that network weights and size are simultaneously estimated. Then, a grid scheduler scheme is proposed to estimate the queuing order in which the tasks should be executed and the most appropriate processor assignment so that the demanded QoS is satisfied as far as possible. A fair scheduling policy is considered the most appropriate. Experimental results on a real grid infrastructure are presented to illustrate the efficiency of the proposed workload prediction and scheduling algorithm compared to other approaches presented in the literature.
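The workload-prediction idea can be outlined with a small scikit-learn regressor mapping scene descriptors (of the kind parsed from RIB files) to rendering cost, after which tasks are queued by predicted cost; the descriptor names, synthetic target, and shortest-job-first rule are placeholders, not the paper's constructive training algorithm or fair scheduling policy.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder descriptors per rendering task: polygon count, light count, texture size, resolution.
rng = np.random.default_rng(1)
descriptors = rng.random((300, 4))
render_cost = 5 * descriptors[:, 0] + 2 * descriptors[:, 1] + rng.normal(0, 0.1, 300)  # synthetic target

# Neural-network workload predictor.
predictor = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
predictor.fit(descriptors[:250], render_cost[:250])

# Schedule incoming tasks: predict cost, then queue shortest-predicted-job-first as a simple policy.
incoming = descriptors[250:]
predicted = predictor.predict(incoming)
queue_order = np.argsort(predicted)
print(queue_order[:10])
```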
